1 Overview of the tutorial

Last time, you conducted Nanopore sequencing. Over the following three sessions we will learn:

  • How to use Orion and conduct genome analysis

  • Quality check, read filtering, mapping to the reference genome, and variant calling

  • How to interpret summary statistics of Nanopore sequence data

  • How to interpret variant data

Review the previous practice

Prepare your tools on Orion

2 Reads quality check

  1. Quality check -> Trimming of low quality reads -> Quality check

  2. Compare the overall reads quality between four conditions

2.1 Connect to Orion and prepare the tools

Go to https://orion.nmbu.no/ at NMBU or with VPN.

In the Terminal/Command Prompt, go to your directory. Review: the concept of the current directory

cd your_directory

Let’s make a directory for the analysis and enter it.

mkdir pig_analysis # make directory "pig_analysis"
cd pig_analysis # set the current directory to "pig_analysis"

Now, you will inspect the fastq file from your experiment, which contains Nanopore read information.

2.2 Check the read quality by NanoPlot

2.2.1 Browse the inside of the read (fastq) file

Review: looking into a file’s contents on the command line

for teachers: please_update_the_file_location_

zcat pigdata_fastq.gz | more

What a fastq file looks like:

Each entry in a FASTQ file consists of 4 lines:

  1. A sequence identifier with information about the sequencing run (run time, run ID, flow cell ID, …).

  2. The sequence (the base calls: A, C, T, G and N).

  3. A separator, which is simply a plus (+) sign.

  4. The base call quality scores. These are Phred+33 encoded, using ASCII characters to represent the numerical quality scores. quality score sheet
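To get a feel for the Phred+33 encoding, here is a small illustrative shell snippet (not part of the course scripts; the character chosen is just an example) that decodes one quality character to its numeric score:

```shell
# Illustrative only: decode one Phred+33 quality character to its score.
# printf '%d' "'C" prints the ASCII code of character C; subtract 33 for the Phred score.
printf '%d\n' "'I" | awk '{print $1 - 33}'   # 'I' (ASCII 73) encodes Q40
```

A score of Q40 corresponds to a base call error probability of 10^(-40/10) = 0.0001.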

2.2.2 Get basic stats of the fastq file

“zcat” -> print the decompressed contents

“wc” -> word count

“-l” -> count lines instead of words

zcat /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/pig_demodata_fastq.gz | wc -l

2.3 Discussion Point

Now you have the number of lines in the fastq file.

How many sequence reads are in the fastq file?

What is the quality of the first 5 bp? What is the quality of the bp between XX and YY? Why do you think they are different?

Need Help?

We see that there are 96000 lines in the fastq file.

As we learned, each entry in a FASTQ file consists of 4 lines, so one read corresponds to four lines. This file therefore contains 96000/4 = 24000 reads.
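The same lines-to-reads arithmetic can be checked on a tiny made-up FASTQ file (the file name and read names below are invented for illustration):

```shell
# Build a toy gzipped FASTQ with 2 reads, then count reads as lines / 4.
printf '@read1\nACGT\n+\nIIII\n@read2\nTTGA\n+\nIIII\n' | gzip > demo.fastq.gz
echo $(( $(zcat demo.fastq.gz | wc -l) / 4 ))   # number of reads
```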

2.4 Quality check by NanoPlot

The original fastq files may contain low-quality reads. In this step, we will use “NanoPlot” to see the quality and length of each read.

“Singularity” is a container platform on Orion for running software. A variety of bioinformatics tools are available through Singularity.

Make a slurm script like the one below and run it.

Review: make a slurm script

Review: run a slurm script with sbatch



#!/bin/bash
#SBATCH --job-name=Nanoplot  # sensible name for the job
#SBATCH --mail-user=yourname@nmbu.no # Email me when job is done.
#SBATCH --mem=12G 
#SBATCH --ntasks=1   
#SBATCH --cpus-per-task=8
#SBATCH --mail-type=END

##Activate conda environment
module load Miniconda3 && eval "$(conda shell.bash hook)"

### NB! Remember to use your own conda environment:

conda activate $SCRATCH/ToolBox/EUKVariantDetection
echo "Working with this $CONDA_PREFIX environment ..."

NanoPlot -t 8  --fastq /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/pig_demodata_fastq.gz --plots dot   --no_supplementary --no_static --N50 -p before

NanoPlot will generate result files named “before”xxx. Let’s look into them…

Review: File transfer between Orion and your computer

2.5 For teachers: please make ready-made result files and specify the location


# taking too long?
qlogin 

cp /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/beforeNanoPlot-report.html beforeNanoPlot-report.html

Open “beforeNanoPlot-report.html” on your local computer

Everything you need in case the scripts do not work well:

for teachers: please specify the location

ls /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data

# use cp command to copy files

# or run the full slurm script
sbatch /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/Bio326_2023_full.slurm

In the next steps, we will:

  • Filter low quality reads and short reads

  • Map the reads to the reference genome

  • Detect variants

for teachers: please replace singularity with conda

2.6 Filtering by NanoFilt


#!/bin/bash
#SBATCH --job-name=Nanofilt  # sensible name for the job
#SBATCH --mail-user=yourname@nmbu.no # Email me when job is done.
#SBATCH --mem=12G 
#SBATCH --ntasks=1   
#SBATCH --mail-type=END

gunzip -c /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/pig_demodata_fastq.gz | singularity exec  /cvmfs/singularity.galaxyproject.org/all/nanofilt:2.8.0--py_0 NanoFilt -q 10 -l 500 | gzip > cleaned.pig.fastq.gz

-l, filter on a minimum read length

-q, filter on a minimum average read quality score

In this case, we remove reads with an average quality score below 10 or a length shorter than 500 bases.
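As a rough illustration of what the length filter does (this awk one-liner is a sketch, not NanoFilt itself, and it ignores the quality filter; the toy reads and the threshold of 5 are invented):

```shell
# Sketch: keep FASTQ records whose sequence is at least min bases long.
printf '@r1\nACG\n+\nIII\n@r2\nACGTACG\n+\nIIIIIII\n' \
  | awk -v min=5 'NR%4==1{h=$0} NR%4==2{s=$0} NR%4==3{p=$0}
                  NR%4==0{if (length(s) >= min) print h"\n"s"\n"p"\n"$0}'
```

Only the 7-base read @r2 survives the filter.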

2.7 Compare the sequences before and after cleaning

Run Nanoplot again on the cleaned sequences.

Need help?

#!/bin/bash
#SBATCH --job-name=Nanoplot  # sensible name for the job
#SBATCH --mail-user=yourname@nmbu.no # Email me when job is done.
#SBATCH --mem=12G 
#SBATCH --ntasks=1   
#SBATCH --cpus-per-task=8
#SBATCH --mail-type=END

singularity exec /cvmfs/singularity.galaxyproject.org/all/nanoplot:1.41.0--pyhdfd78af_0 NanoPlot -t 8  --fastq cleaned.pig.fastq.gz  --N50  --no_supplementary --no_static  --plots dot   -p after

Open “afterNanoPlot-report.html” on your local computer.


# taking too long?
qlogin 

cp /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/afterNanoPlot-report.html afterNanoPlot-report.html

2.8 Discussion Point

Did you see the difference in the read length and quality distributions before and after filtering?

for teachers: please make the quality check results file for all experiments in a shared directory and specify the location

In case Singularity does not work … use conda

3 Mapping to the reference genome

3.1 Run Minimap2 and map the reads to the reference genome

for teachers: please replace the bull ref. genome (ver. 2023) with the pig genome

for teachers: please make four input fastq files, merging multiple fastq files from the same conditions (Vortex, Needle, Freeze and Ctrl); cleaned.pig.fastq.gz should be replaced with the four files



#!/bin/bash
#SBATCH --job-name=Minimap2  # sensible name for the job
#SBATCH --mail-user=yourname@nmbu.no # Email me when job is done.
#SBATCH --mem=12G 
#SBATCH --ntasks=1   
#SBATCH --cpus-per-task=8
#SBATCH --mail-type=END

singularity exec /cvmfs/singularity.galaxyproject.org/all/minimap2:2.24--h7132678_1 minimap2 -t 8 -a /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/Bos_taurus.fa.gz  cleaned.pig.fastq.gz > pig.sam
# updated (the reference Bos taurus fasta location)


# convert the sam file to bam format
singularity exec  /cvmfs/singularity.galaxyproject.org/all/samtools:1.16.1--h6899075_1 samtools view -S -b pig.sam > pig0.bam

## sort the bam file
singularity exec  /cvmfs/singularity.galaxyproject.org/all/samtools:1.16.1--h6899075_1 samtools sort pig0.bam -o pig.bam

# index the bam file
singularity exec  /cvmfs/singularity.galaxyproject.org/all/samtools:1.16.1--h6899075_1 samtools index -M  pig.bam


# Variant Calling using Sniffles
singularity exec /cvmfs/singularity.galaxyproject.org/all/sniffles:2.0.7--pyhdfd78af_0 sniffles --input  pig.bam --vcf pig.vcf
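As a quick sanity check of the mapping output, the SAM FLAG field (column 2) tells you whether a read mapped: bit 0x4 set means unmapped. A toy sketch with made-up records (plain awk, no samtools needed):

```shell
# Count mapped reads by testing FLAG bit 0x4 (set = unmapped).
# int($2/4)%2 extracts that bit without gawk's and() function.
printf 'r1\t0\tchr1\t100\t60\t4M\t*\t0\t0\tACGT\tIIII\nr2\t4\t*\t0\t0\t*\t*\t0\t0\tACGT\tIIII\n' \
  | awk -F'\t' 'int($2/4)%2==0{n++} END{print n" mapped of "NR}'
```

On the real data you could run the same idea over pig.sam after skipping the '@' header lines, or simply use samtools flagstat.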

# taking too long?
qlogin 
ls /net/fs-2/scale/OrionStore/Courses/BIO326/EUK/pig_analysis/demo_data/

# and copy the file you need (the final product is .vcf file)

Now you have the variant file!

4 Investigate the variants

The resulting vcf file:

  • Celian will explain how to read a vcf file. Go to mentimeter (note to self : put link here)

# INFO field

grep '^##' pig.vcf | tail -n 20

# variants
grep -v '^##' pig.vcf | more

Important parameters

1 16849578 : location of the variant (chromosome 1, position 16849578)

SVTYPE=DEL;SVLEN=-60 : size and type of the variant

0/1 : genotype

(you can open a vcf file in notepad, excel etc.)
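The fields above can also be pulled out on the command line; here is a sketch with one made-up deletion record matching the example values (the REF/ALT/QUAL fields are invented for illustration):

```shell
# Extract CHROM, POS, the first two INFO keys, and the genotype from a toy VCF line.
printf '1\t16849578\t.\tN\t<DEL>\t60\tPASS\tSVTYPE=DEL;SVLEN=-60\tGT\t0/1\n' \
  | awk -F'\t' '{split($8, info, ";"); print $1, $2, info[1], info[2], $10}'
```

This prints `1 16849578 SVTYPE=DEL SVLEN=-60 0/1`.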

Now you have variants! Let’s see which genes are affected by them.

First, we will select a random variant to investigate.

# Check the number of variants in the file (note: bcftools index -n requires a bgzip-compressed and indexed vcf)

NBVAR=$(bcftools index -n pig.vcf)

## sample a random number

RANDOMVAR=$(echo $((RANDOM % $NBVAR + 1)))

## let's check the variant sampled

bcftools view -H pig.vcf | sed -n ${RANDOMVAR}p
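If bcftools is not available, the same random sampling can be sketched with plain shell tools (demo.vcf below is a made-up two-variant file; this assumes an uncompressed vcf and bash’s $RANDOM):

```shell
# Pick one random variant line from a vcf without bcftools.
printf '##fileformat=VCFv4.2\n#CHROM\tPOS\tID\tREF\tALT\n1\t100\t.\tA\tT\n1\t200\t.\tG\tC\n' > demo.vcf
NBVAR=$(grep -vc '^#' demo.vcf)        # number of non-header (variant) lines
RANDOMVAR=$(( RANDOM % NBVAR + 1 ))    # random index in 1..NBVAR
grep -v '^#' demo.vcf | sed -n "${RANDOMVAR}p"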

4.1 Estimate the effect of variants

Go to VEP (Variant Effect Predictor)

Variant Effect Predictor tells us where in the genome the discovered variants are located (genic, regulatory, etc.).

Select “cow” as the reference species.

Upload pig.vcf (downloaded from Orion in the section above) as the file to investigate.

There are 428 variants; 88 genes are affected by these variants.

What are the most affected genes?

Click “Filters” and set “Impact is HIGH” to select high-impact variants.

There are some frameshift/transcript ablation variants.

Let’s investigate your variant closely!

Find your variant by downloading the .txt file